Search Results

Documents authored by Chen, Lijie


Document
Track A: Algorithms, Complexity and Games
New PRGs for Unbounded-Width/Adaptive-Order Read-Once Branching Programs

Authors: Lijie Chen, Xin Lyu, Avishay Tal, and Hongxun Wu

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
We give the first pseudorandom generators with sub-linear seed length for the following variants of read-once branching programs (roBPs):

1) First, we show there is an explicit PRG of seed length O(log²(n/ε)log(n)) fooling unbounded-width unordered permutation branching programs with a single accept state, where n is the length of the program. Previously, [Lee-Pyne-Vadhan RANDOM 2022] gave a PRG with seed length Ω(n) for this class. For the ordered case, [Hoza-Pyne-Vadhan ITCS 2021] gave a PRG with seed length Õ(log n ⋅ log 1/ε).

2) Second, we show there is an explicit PRG fooling unbounded-width unordered regular branching programs with a single accept state with seed length Õ(√{n ⋅ log 1/ε} + log 1/ε). Previously, no non-trivial PRG (with seed length less than n) was known for this class (even in the ordered setting). For the ordered case, [Bogdanov-Hoza-Prakriya-Pyne CCC 2022] gave an HSG with seed length Õ(log n ⋅ log 1/ε).

3) Third, we show there is an explicit PRG fooling width-w adaptive branching programs with seed length O(log n ⋅ log²(nw/ε)). Here, the branching program can choose an input bit to read depending on its current state, while it is guaranteed that on any input x ∈ {0,1}ⁿ, the branching program reads each input bit exactly once. Previously, no PRG with a non-trivial seed length was known for this class. We remark that there are some functions computable by constant-width adaptive branching programs but not by sub-exponential-width unordered branching programs.

In terms of techniques, we in fact show that the Forbes-Kelly PRG (with the right parameters) from [Forbes-Kelly FOCS 2018] already fools all of the variants of roBPs above. Our proof adds several new ideas to the original analysis of Forbes-Kelly, and we believe it further demonstrates the versatility of the Forbes-Kelly PRG.
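To make the model concrete, here is a minimal Python sketch (our illustration, not the Forbes-Kelly construction or any of the paper's PRGs) of an ordered read-once branching program, its exact acceptance probability under a uniform input, and the fooling error that a PRG must keep below ε. The function names and the toy width-3 permutation program are our own; the generator argument `gen` is a placeholder.

from itertools import product

def evaluate(bp, x, accept_state):
    # bp[i][b][s] = next state after reading bit b in state s at layer i.
    s = 0
    for i, bit in enumerate(x):
        s = bp[i][bit][s]
    return int(s == accept_state)

def acceptance_probability(bp, accept_state):
    # Exact Pr_x[BP accepts] by propagating the state distribution layer by layer.
    width = len(bp[0][0])
    dist = [0.0] * width
    dist[0] = 1.0
    for layer in bp:
        new = [0.0] * width
        for s, p in enumerate(dist):
            for b in (0, 1):
                new[layer[b][s]] += 0.5 * p
        dist = new
    return dist[accept_state]

def prg_error(bp, accept_state, gen, seeds):
    # |Pr_x[BP(x)] - Pr_seed[BP(gen(seed))]|: the quantity a PRG must make small.
    hits = sum(evaluate(bp, gen(seed), accept_state) for seed in seeds)
    return abs(acceptance_probability(bp, accept_state) - hits / len(seeds))

# Toy width-3 permutation BP computing "Hamming weight of x mod 3 == 0":
# on bit 0 stay put, on bit 1 rotate the states; both maps are permutations.
n, w = 3, 3
ident = list(range(w))
rot = [(s + 1) % w for s in range(w)]
bp = [(ident, rot)] * n
assert abs(acceptance_probability(bp, 0) - 2 / 8) < 1e-12  # (0,0,0) and (1,1,1)
# Sanity check: the "generator" that enumerates all inputs has fooling error 0.
assert prg_error(bp, 0, lambda s: s, list(product((0, 1), repeat=n))) == 0.0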

Cite as

Lijie Chen, Xin Lyu, Avishay Tal, and Hongxun Wu. New PRGs for Unbounded-Width/Adaptive-Order Read-Once Branching Programs. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2023.39,
  author =	{Chen, Lijie and Lyu, Xin and Tal, Avishay and Wu, Hongxun},
  title =	{{New PRGs for Unbounded-Width/Adaptive-Order Read-Once Branching Programs}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{39:1--39:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.39},
  URN =		{urn:nbn:de:0030-drops-180916},
  doi =		{10.4230/LIPIcs.ICALP.2023.39},
  annote =	{Keywords: pseudorandom generators, derandomization, read-once branching programs}
}
Document
New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method

Authors: Lijie Chen

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
In this paper, we obtain several new results on lower bounds and derandomization for ACC⁰ circuits (constant-depth circuits consisting of AND/OR/MOD_m gates for a fixed constant m, a frontier class in circuit complexity):

1) We prove that any polynomial-time Merlin-Arthur proof system with an ACC⁰ verifier (denoted by MA_{ACC⁰}) can be simulated by a nondeterministic proof system with quasi-polynomial running time and polynomial proof length, on infinitely many input lengths. This improves the previous simulation by [Chen, Lyu, and Williams, FOCS 2020], which requires both quasi-polynomial running time and proof length.

2) We show that MA_{ACC⁰} cannot be computed by fixed-polynomial-size ACC⁰ circuits, and our hard languages are hard on a sufficiently dense set of input lengths.

3) We show that NEXP (nondeterministic exponential-time) does not have ACC⁰ circuits of sub-half-exponential size, improving the previous sub-third-exponential size lower bound for NEXP against ACC⁰ by [Williams, J. ACM 2014].

Combining our first and second results gives a conceptually simpler and derandomization-centric proof of the recent breakthrough result NQP := NTIME[2^{polylog(n)}] ⊄ ACC⁰ by [Murray and Williams, SICOMP 2020]: instead of going through an easy witness lemma as they did, we first prove an ACC⁰ lower bound for a subclass of MA, and then derandomize that subclass into NQP, while retaining its hardness against ACC⁰. Moreover, since our derandomization of MA_{ACC⁰} achieves a polynomial proof length, we in fact prove that nondeterministic quasi-polynomial-time with n^{ω(1)} nondeterminism bits (denoted as NTIMEGUESS[2^{polylog(n)}, n^{ω(1)}]) has no poly(n)-size ACC⁰ circuits, giving a new proof of a result by Vyas. Combining with a win-win argument based on randomized encodings from [Chen and Ren, STOC 2020], we also prove that NTIMEGUESS[2^{polylog(n)}, n^{ω(1)}] cannot be 1/2+1/poly(n)-approximated by poly(n)-size ACC⁰ circuits, improving the recent strongly average-case lower bounds for NQP against ACC⁰ by [Chen and Ren, STOC 2020].

One interesting technical ingredient behind our second result is the construction of a PSPACE-complete language that is paddable, downward self-reducible, same-length checkable, and weakly error correctable. Moreover, all its reducibility properties have corresponding AC⁰[2] non-adaptive oracle circuits. Our construction builds on and improves upon similar constructions from [Trevisan and Vadhan, Complexity 2007] and [Chen, FOCS 2019], which all require at least TC⁰ oracle circuits for implementing these properties.

Cite as

Lijie Chen. New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chen:LIPIcs.ITCS.2023.34,
  author =	{Chen, Lijie},
  title =	{{New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.34},
  URN =		{urn:nbn:de:0030-drops-175373},
  doi =		{10.4230/LIPIcs.ITCS.2023.34},
  annote =	{Keywords: Circuit Lower Bounds, Derandomization, Algorithmic Method, ACC}
}
Document
Black-Box Constructive Proofs Are Unavoidable

Authors: Lijie Chen, Ryan Williams, and Tianqi Yang

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Following Razborov and Rudich, a "natural property" for proving a circuit lower bound satisfies three axioms: constructivity, largeness, and usefulness. In 2013, Williams proved that for any reasonable circuit class C, NEXP ⊄ C is equivalent to the existence of a constructive property useful against C. Here, a property is constructive if it can be decided in poly(N) time, where N = 2ⁿ is the length of the truth-table of the given n-input function. Recently, Fan, Li, and Yang initiated the study of black-box natural properties, which require a much stronger notion of constructivity, called black-box constructivity: the property should be decidable in randomized polylog(N) time, given oracle access to the n-input function. They showed that most proofs based on random restrictions yield black-box natural properties, and demonstrated limitations on what black-box natural properties can prove.

In this paper, perhaps surprisingly, we prove that the equivalence of Williams holds even with this stronger notion of black-box constructivity: for any reasonable circuit class C, NEXP ⊄ C is equivalent to the existence of a black-box constructive property useful against C. The main technical ingredient in proving this equivalence is a smooth, strong, and locally-decodable probabilistically checkable proof (PCP), which we construct based on a recent work by Paradise.

As a by-product, we show that average-case witness lower bounds for PCP verifiers follow from NEXP lower bounds. We also show that randomness is essential in the definition of black-box constructivity: we unconditionally prove that there is no deterministic polylog(N)-time constructive property that is useful against even polynomial-size AC⁰ circuits.

Cite as

Lijie Chen, Ryan Williams, and Tianqi Yang. Black-Box Constructive Proofs Are Unavoidable. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 35:1-35:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2023.35,
  author =	{Chen, Lijie and Williams, Ryan and Yang, Tianqi},
  title =	{{Black-Box Constructive Proofs Are Unavoidable}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{35:1--35:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.35},
  URN =		{urn:nbn:de:0030-drops-175380},
  doi =		{10.4230/LIPIcs.ITCS.2023.35},
  annote =	{Keywords: Circuit lower bounds, natural proofs, probabilistically checkable proofs}
}
Document
Extremely Efficient Constructions of Hash Functions, with Applications to Hardness Magnification and PRFs

Authors: Lijie Chen, Jiatu Li, and Tianqi Yang

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
In a recent work, Fan, Li, and Yang (STOC 2022) constructed a family of almost-universal hash functions such that each function in the family is computable by (2n + o(n))-gate circuits of fan-in 2 over the B₂ basis. Applying this family, they established the existence of pseudorandom functions computable by circuits of the same complexity, under the standard assumption that OWFs exist. However, a major disadvantage of the hash family construction by Fan, Li, and Yang (STOC 2022) is that it requires a seed length of poly(n), which limits its potential applications. We address this issue by giving an improved construction of almost-universal hash functions with seed length polylog(n), such that each function in the family is computable with POLYLOGTIME-uniform (2n + o(n))-gate circuits. Our new construction has the following applications in both complexity theory and cryptography.

- (Hardness magnification). Let α : ℕ → ℕ be any function such that α(n) ≤ log n / log log n. We show that if there is an n^{α(n)}-sparse NP language that does not have probabilistic circuits of 2n + O(n/log log n) gates, then we have (1) NTIME[2ⁿ] ⊈ SIZE[2^{n^{1/5}}] and (2) NP ⊈ SIZE[n^k] for every constant k. Complementing this magnification phenomenon, we present an O(n)-sparse language in P which requires probabilistic circuits of size at least 2n - 2. This is the first result in hardness magnification showing that even a sub-linear additive improvement on known circuit size lower bounds would imply NEXP ⊄ P_{/poly}. Following Chen, Jin, and Williams (STOC 2020), we also establish a sharp threshold for explicit obstructions: we give an explicit obstruction against (2n-2)-size circuits, and prove that a sub-linear additive improvement on the circuit size would imply (1) DTIME[2ⁿ] ⊈ SIZE[2^{n^{1/5}}] and (2) P ⊈ SIZE[n^k] for every constant k.

- (Extremely efficient construction of pseudorandom functions). Assuming that one of integer factoring, decisional Diffie-Hellman, or ring learning-with-errors is sub-exponentially hard, we show the existence of pseudorandom functions computable by POLYLOGTIME-uniform AC⁰[2] circuits with 2n + o(n) wires, with key length polylog(n). We also show that PRFs computable by POLYLOGTIME-uniform B₂ circuits of 2n + o(n) gates follow from the existence of sub-exponentially secure one-way functions.
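As a point of reference for the object being optimized, the sketch below implements a classical Toeplitz-matrix hash family over GF(2) (a textbook construction, not the one in this paper): it is XOR-universal, but its seed has n + m - 1 bits, whereas the paper achieves almost-universality with polylog(n)-bit seeds and (2n + o(n))-gate circuits.

import random

def toeplitz_hash(seed, x, m):
    # Row i of the Toeplitz matrix is T[i][j] = seed[i - j + n - 1];
    # h(x) = T x over GF(2), mapping n bits to m bits.
    n = len(x)
    assert len(seed) == n + m - 1
    return tuple(
        sum(seed[i - j + n - 1] & x[j] for j in range(n)) % 2
        for i in range(m)
    )

# Empirical universality check for one fixed pair x != y: over a random
# seed, Pr[h(x) = h(y)] should be about 2^-m = 0.0625 here.
n, m = 8, 4
x = (1, 0, 1, 1, 0, 0, 1, 0)
y = (0, 1, 1, 0, 1, 0, 0, 1)
coll = 0
trials = 20000
for _ in range(trials):
    seed = [random.randint(0, 1) for _ in range(n + m - 1)]
    coll += toeplitz_hash(seed, x, m) == toeplitz_hash(seed, y, m)
print(coll / trials, "vs", 2 ** -m)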

Cite as

Lijie Chen, Jiatu Li, and Tianqi Yang. Extremely Efficient Constructions of Hash Functions, with Applications to Hardness Magnification and PRFs. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 23:1-23:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CCC.2022.23,
  author =	{Chen, Lijie and Li, Jiatu and Yang, Tianqi},
  title =	{{Extremely Efficient Constructions of Hash Functions, with Applications to Hardness Magnification and PRFs}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{23:1--23:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.23},
  URN =		{urn:nbn:de:0030-drops-165852},
  doi =		{10.4230/LIPIcs.CCC.2022.23},
  annote =	{Keywords: Almost universal hash functions, hardness magnification, pseudorandom functions}
}
Document
Improved Merlin-Arthur Protocols for Central Problems in Fine-Grained Complexity

Authors: Shyan Akmal, Lijie Chen, Ce Jin, Malvika Raj, and Ryan Williams

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
In a Merlin-Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin-Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

- Certifying that a list of n integers has no 3-SUM solution can be done in Merlin-Arthur time Õ(n). Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in Õ(n^{1.5}) time (that is, there is a proof system with proofs of length Õ(n^{1.5}) and a deterministic verifier running in Õ(n^{1.5}) time).

- Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin-Arthur time Õ(n^{⌈k/2⌉}) (where k ≥ 3). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin-Arthur time Õ(m). Previous Merlin-Arthur protocols by Williams [CCC'16] and Björklund and Kaski [PODC'16] could only count k-cliques in unweighted graphs, and had worse running times for small k.

- Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin-Arthur time Õ(n²). Note this is optimal, as the matrix can have Ω(n²) nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an Õ(n^{2.94}) nondeterministic time algorithm.

- Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin-Arthur time 2^{n/2 - n/O(k)}. We also observe an algebrization barrier for the previous 2^{n/2}⋅poly(n)-time Merlin-Arthur protocol of R. Williams [CCC'16] for #SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in 2^{n/2}/n^{ω(1)} time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

- Certifying that a Quantified Boolean Formula is true can be done in Merlin-Arthur time 2^{4n/5}⋅poly(n). Previously, the only nontrivial result known along these lines was an Arthur-Merlin-Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in 2^{2n/3}⋅poly(n) time.

Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin-Arthur time 2^{n/3}⋅poly(n), improving on the previous best protocol by Nederlof [IPL 2017], which took 2^{0.49991n}⋅poly(n) time.
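For readers new to the model, the sketch below shows the Merlin-Arthur pattern in its classical warm-up form, Freivalds' verification of a claimed matrix product (a standard textbook check, not one of this paper's protocols): Merlin sends the product matrix as the proof, and Arthur spot-checks it in O(n²) randomized time per repetition instead of recomputing the product.

import random

def freivalds_verify(A, B, C, reps=30):
    # Arthur's check of Merlin's claim "A @ B == C": O(n^2) work per rep;
    # a false claim survives each repetition with probability <= 1/2.
    n = len(A)
    for _ in range(reps):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False
    return True

# A correct product passes; a corrupted entry is rejected (whp).
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]      # the true product A @ B
bad = [[19, 22], [43, 51]]
assert freivalds_verify(A, B, C)
assert not freivalds_verify(A, B, bad)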

Cite as

Shyan Akmal, Lijie Chen, Ce Jin, Malvika Raj, and Ryan Williams. Improved Merlin-Arthur Protocols for Central Problems in Fine-Grained Complexity. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 3:1-3:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{akmal_et_al:LIPIcs.ITCS.2022.3,
  author =	{Akmal, Shyan and Chen, Lijie and Jin, Ce and Raj, Malvika and Williams, Ryan},
  title =	{{Improved Merlin-Arthur Protocols for Central Problems in Fine-Grained Complexity}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{3:1--3:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.3},
  URN =		{urn:nbn:de:0030-drops-155991},
  doi =		{10.4230/LIPIcs.ITCS.2022.3},
  annote =	{Keywords: Fine-grained complexity, Merlin-Arthur proofs}
}
Document
Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions

Authors: Lijie Chen, Shuichi Hirahara, and Neekon Vafa

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH:

- NTIME[n] cannot be solved in quasi-linear time on average if UP ⊈ DTIME[2^{Õ(√n)}].

- Σ₂TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known if even average-case hardness of Σ₃SAT implies the average-case hardness of Σ₂TIME[n].

- Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0.

Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the setting of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.

Cite as

Lijie Chen, Shuichi Hirahara, and Neekon Vafa. Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 45:1-45:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2022.45,
  author =	{Chen, Lijie and Hirahara, Shuichi and Vafa, Neekon},
  title =	{{Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{45:1--45:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.45},
  URN =		{urn:nbn:de:0030-drops-156411},
  doi =		{10.4230/LIPIcs.ITCS.2022.45},
  annote =	{Keywords: Average-case complexity, worst-case to average-case reduction}
}
Document
Track A: Algorithms, Complexity and Games
Majority vs. Approximate Linear Sum and Average-Case Complexity Below NC¹

Authors: Lijie Chen, Zhenjian Lu, Xin Lyu, and Igor C. Oliveira

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We develop a general framework that characterizes strong average-case lower bounds against circuit classes 𝒞 contained in NC¹, such as AC⁰[⊕] and ACC⁰. We apply this framework to show:

- Generic seed reduction: Pseudorandom generators (PRGs) against 𝒞 of seed length ≤ n - 1 and error ε(n) = n^{-ω(1)} can be converted into PRGs of sub-polynomial seed length.

- Hardness under natural distributions: If 𝖤 (deterministic exponential time) is average-case hard against 𝒞 under some distribution, then 𝖤 is average-case hard against 𝒞 under the uniform distribution.

- Equivalence between worst-case and average-case hardness: Worst-case lower bounds against MAJ∘𝒞 for problems in 𝖤 are equivalent to strong average-case lower bounds against 𝒞. This can be seen as a certain converse to the Discriminator Lemma [Hajnal et al., JCSS'93].

These results were not known to hold for circuit classes that do not compute majority. Additionally, we prove that classical and recent approaches to worst-case lower bounds against ACC⁰ via communication lower bounds for NOF multi-party protocols [Håstad and Goldmann, CC'91; Razborov and Wigderson, IPL'93] and degree lower bounds for torus polynomials [Bhrushundi et al., ITCS'19] also imply strong average-case hardness against ACC⁰ under the uniform distribution.

Crucial to these results is the use of non-black-box hardness amplification techniques and the interplay between Majority (MAJ) and Approximate Linear Sum (SUM̃) gates. Roughly speaking, while a MAJ gate outputs 1 when the sum of the m input bits is at least m/2, a SUM̃ gate computes a real-valued bounded weighted sum of the input bits and outputs 1 (resp. 0) if the sum is close to 1 (resp. close to 0), with the promise that one of the two cases always holds (a toy rendering of these gate semantics appears after this abstract). As part of our framework, we explore ideas introduced in [Chen and Ren, STOC'20] to show that, for the purpose of proving lower bounds, a top layer MAJ gate is equivalent to a (weaker) SUM̃ gate. Motivated by this result, we extend the algorithmic method and establish stronger lower bounds against bounded-depth circuits with layers of MAJ and SUM̃ gates. Among them, we prove:

- Lower bound: NQP does not admit fixed quasi-polynomial size MAJ∘SUM̃∘ACC⁰∘THR circuits. This is the first explicit lower bound against circuits with distinct layers of MAJ, SUM̃, and THR gates. Consequently, if the aforementioned equivalence between MAJ and SUM̃ as a top gate can be extended to intermediate layers, long sought-after lower bounds against the class THR∘THR of depth-2 polynomial-size threshold circuits would follow.
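Here is a toy Python rendering of the two gate semantics just described (our illustration; the promise parameter eps is an arbitrary choice):

def maj(bits):
    # MAJ gate: outputs 1 iff at least half of the m input bits are 1.
    return int(sum(bits) >= len(bits) / 2)

def approx_linear_sum(weights, bits, eps=0.1):
    # SUM̃ gate: a bounded real weighted sum, *promised* to land near 0
    # or near 1; outputs the nearby value and is undefined off-promise.
    s = sum(w * b for w, b in zip(weights, bits))
    if abs(s - 1.0) <= eps:
        return 1
    if abs(s) <= eps:
        return 0
    raise ValueError("promise violated: %r is not close to 0 or 1" % s)

print(maj([1, 0, 1, 1]))                              # -> 1
print(approx_linear_sum([0.55, 0.52], [1, 1], 0.1))   # sum 1.07 is near 1 -> 1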

Cite as

Lijie Chen, Zhenjian Lu, Xin Lyu, and Igor C. Oliveira. Majority vs. Approximate Linear Sum and Average-Case Complexity Below NC¹. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 51:1-51:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2021.51,
  author =	{Chen, Lijie and Lu, Zhenjian and Lyu, Xin and Oliveira, Igor C.},
  title =	{{Majority vs. Approximate Linear Sum and Average-Case Complexity Below NC¹}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{51:1--51:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.51},
  URN =		{urn:nbn:de:0030-drops-141202},
  doi =		{10.4230/LIPIcs.ICALP.2021.51},
  annote =	{Keywords: circuit complexity, average-case hardness, complexity lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Near-Optimal Two-Pass Streaming Algorithm for Sampling Random Walks over Directed Graphs

Authors: Lijie Chen, Gillat Kol, Dmitry Paramonov, Raghuvansh R. Saxena, Zhao Song, and Huacheng Yu

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
For a directed graph G with n vertices and a start vertex u_start, we wish to (approximately) sample an L-step random walk over G starting from u_start with minimum space, using an algorithm that makes only a few passes over the edges of the graph. This problem has found many applications, for instance, in approximating the PageRank of a webpage. If only a single pass is allowed, the space complexity of this problem was shown to be Θ̃(n ⋅ L). Prior to our work, a better space complexity was only known with Õ(√L) passes. We essentially settle the space complexity of this random walk simulation problem for two-pass streaming algorithms, showing that it is Θ̃(n ⋅ √L), by giving almost matching upper and lower bounds. Our lower bound argument extends to every constant number of passes p, and shows that any p-pass algorithm for this problem uses Ω̃(n ⋅ L^{1/p}) space. In addition, we show a similar Θ̃(n ⋅ √L) bound on the space complexity of any algorithm (with any number of passes) for the related problem of sampling an L-step random walk from every vertex in the graph.
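For contrast with the paper's two-pass Θ̃(n ⋅ √L) algorithm, the sketch below implements the simple one-pass Θ̃(n ⋅ L)-space baseline mentioned above: reservoir-sample L independent out-neighbors per vertex during the pass, then stitch the walk from those samples afterwards (function names are ours).

import random

def one_pass_walk_samples(edge_stream, n, L):
    # One pass: samples[v][t] is an independent uniform out-neighbor of v,
    # maintained by reservoir sampling. O(n * L) space; L samples per vertex
    # suffice because an L-step walk visits any vertex at most L times.
    samples = [[None] * L for _ in range(n)]
    outdeg = [0] * n
    for (u, v) in edge_stream:
        outdeg[u] += 1
        for t in range(L):
            if random.random() < 1.0 / outdeg[u]:
                samples[u][t] = v
    return samples

def stitch_walk(samples, u_start, L):
    # Consume a fresh sample of the current vertex at each step.
    walk, cur = [u_start], u_start
    used = [0] * len(samples)
    for _ in range(L):
        nxt = samples[cur][used[cur]]
        if nxt is None:          # dead end: no out-edges
            break
        used[cur] += 1
        walk.append(nxt)
        cur = nxt
    return walk

edges = [(0, 1), (1, 2), (2, 0), (1, 0)]
samp = one_pass_walk_samples(iter(edges), n=3, L=4)
print(stitch_walk(samp, 0, 4))   # e.g. [0, 1, 2, 0, 1]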

Cite as

Lijie Chen, Gillat Kol, Dmitry Paramonov, Raghuvansh R. Saxena, Zhao Song, and Huacheng Yu. Near-Optimal Two-Pass Streaming Algorithm for Sampling Random Walks over Directed Graphs. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 52:1-52:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2021.52,
  author =	{Chen, Lijie and Kol, Gillat and Paramonov, Dmitry and Saxena, Raghuvansh R. and Song, Zhao and Yu, Huacheng},
  title =	{{Near-Optimal Two-Pass Streaming Algorithm for Sampling Random Walks over Directed Graphs}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{52:1--52:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.52},
  URN =		{urn:nbn:de:0030-drops-141218},
  doi =		{10.4230/LIPIcs.ICALP.2021.52},
  annote =	{Keywords: streaming algorithms, random walk sampling}
}
Document
On Distributed Differential Privacy and Counting Distinct Elements

Authors: Lijie Chen, Badih Ghazi, Ravi Kumar, and Pasin Manurangsi

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We study the setup where each of n users holds an element from a discrete set, and the goal is to count the number of distinct elements across all users, under the constraint of (ε,δ)-differential privacy:

- In the non-interactive local setting, we prove that the additive error of any protocol is Ω(n) for any constant ε and for any δ inverse polynomial in n.

- In the single-message shuffle setting, we prove a lower bound of Ω̃(n) on the error for any constant ε and for some δ inverse quasi-polynomial in n. We do so by building on the moment-matching method from the literature on distribution estimation.

- In the multi-message shuffle setting, we give a protocol with at most one message per user in expectation and with an error of Õ(√n) for any constant ε and for any δ inverse polynomial in n. Our protocol is also robustly shuffle private, and our error of √n matches a known lower bound for such protocols.

Our proof technique relies on a new notion, which we call dominated protocols, and which can also be used to obtain the first non-trivial lower bounds against multi-message shuffle protocols for the well-studied problems of selection and learning parity. Our first lower bound for estimating the number of distinct elements provides the first ω(√n) separation between global sensitivity and error in local differential privacy, thus answering an open question of Vadhan (2017). We also provide a simple construction that gives an Ω̃(n) separation between global sensitivity and error in two-party differential privacy, thereby answering an open question of McGregor et al. (2011).

Cite as

Lijie Chen, Badih Ghazi, Ravi Kumar, and Pasin Manurangsi. On Distributed Differential Privacy and Counting Distinct Elements. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 56:1-56:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2021.56,
  author =	{Chen, Lijie and Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin},
  title =	{{On Distributed Differential Privacy and Counting Distinct Elements}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{56:1--56:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.56},
  URN =		{urn:nbn:de:0030-drops-135953},
  doi =		{10.4230/LIPIcs.ITCS.2021.56},
  annote =	{Keywords: Differential Privacy, Shuffle Model}
}
Document
Beyond Natural Proofs: Hardness Magnification and Locality

Authors: Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Hardness magnification reduces major complexity separations (such as EXP ⊈ NC¹) to proving lower bounds for some natural problem Q against weak circuit models. Several recent works [Igor Carboni Oliveira and Rahul Santhanam, 2018; Dylan M. McKay et al., 2019; Lijie Chen and Roei Tell, 2019; Igor Carboni Oliveira et al., 2019; Lijie Chen et al., 2019; Igor Carboni Oliveira, 2019; Lijie Chen et al., 2019] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than Q, while Q itself is susceptible to lower bounds, but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon, and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program:

- Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [Alexander A. Razborov and Steven Rudich, 1997]?

- Can we adapt known lower bound techniques to establish the desired lower bound for Q?

We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem MCSP imply the non-existence of natural proofs. As a corollary of our result, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms.

Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower bound techniques to prove strong lower bounds via magnification. This is captured by a locality barrier: existing magnification theorems unconditionally show that the problems Q considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower bound techniques against weak circuit models quite often easily extend to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.

Cite as

Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam. Beyond Natural Proofs: Hardness Magnification and Locality. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 70:1-70:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2020.70,
  author =	{Chen, Lijie and Hirahara, Shuichi and Oliveira, Igor C. and Pich, J\'{a}n and Rajgopal, Ninad and Santhanam, Rahul},
  title =	{{Beyond Natural Proofs: Hardness Magnification and Locality}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{70:1--70:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.70},
  URN =		{urn:nbn:de:0030-drops-117550},
  doi =		{10.4230/LIPIcs.ITCS.2020.70},
  annote =	{Keywords: Hardness Magnification, Natural Proofs, Minimum Circuit Size Problem, Circuit Lower Bounds}
}
Document
Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity

Authors: Lijie Chen and R. Ryan Williams

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We considerably sharpen the known connections between circuit-analysis algorithms and circuit lower bounds, show intriguing equivalences between the analysis of weak circuits and (apparently difficult) circuits, and provide strong new lower bounds for approximately computing Boolean functions with depth-two neural networks and related models.

- We develop approaches to proving THR∘THR lower bounds (a notorious open problem), by connecting algorithmic analysis of THR∘THR to the provably weaker circuit classes THR∘MAJ and MAJ∘MAJ, where exponential lower bounds have long been known. More precisely, we show equivalences between algorithmic analysis of THR∘THR and these weaker classes. The ε-error CAPP problem asks to approximate the acceptance probability of a given circuit to within additive error ε; it is the "canonical" derandomization problem. We show: there is a non-trivial (2ⁿ/n^{ω(1)} time) 1/poly(n)-error CAPP algorithm for poly(n)-size THR∘THR circuits if and only if there is such an algorithm for poly(n)-size MAJ∘MAJ circuits; and there is a δ > 0 and a non-trivial SAT (δ-error CAPP) algorithm for poly(n)-size THR∘THR circuits if and only if there is such an algorithm for poly(n)-size THR∘MAJ circuits. Similar results hold for depth-d linear threshold circuits and depth-d MAJORITY circuits. These equivalences are proved via new simulations of THR circuits by circuits with MAJ gates.

- We strengthen the connection between non-trivial derandomization (non-trivial CAPP algorithms) for a circuit class C, and circuit lower bounds against C. Previously, [Ben-Sasson and Viola, ICALP 2014] (following [Williams, STOC 2010]) showed that for any polynomial-size class C closed under projections, non-trivial (2ⁿ/n^{ω(1)} time) CAPP for OR_{poly(n)}∘AND₃∘C yields that NEXP does not have C circuits. We apply Probabilistically Checkable Proofs of Proximity in a new way to show it would suffice to have a non-trivial CAPP algorithm for either XOR₂∘C, AND₂∘C, or OR₂∘C.

- A direct corollary of the first two bullets is that "NEXP does not have THR∘THR circuits" would follow from either a non-trivial δ-error CAPP (or SAT) algorithm for poly(n)-size THR∘MAJ circuits, or a non-trivial 1/poly(n)-error CAPP algorithm for poly(n)-size MAJ∘MAJ circuits.

- Applying the above machinery, we extend lower bounds for depth-two neural networks and related models [R. Williams, CCC 2018] to weak approximate computations of Boolean functions. For example, for arbitrarily small ε > 0, we prove there are Boolean functions f computable in nondeterministic n^{log n} time such that (for infinitely many n) every polynomial-size depth-two neural network N on n inputs (with sign or ReLU activation) must satisfy max_{x ∈ {0,1}ⁿ} |N(x)-f(x)| > 1/2-ε. That is, short linear combinations of ReLU gates fail miserably at computing f to within close precision. Similar results are proved for linear combinations of ACC∘THR circuits, and linear combinations of low-degree 𝔽_p polynomials. These results constitute further progress towards THR∘THR lower bounds.
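As background on the CAPP problem defined in the first bullet, the sketch below is the trivial *randomized* estimator that non-trivial derandomization seeks to beat deterministically: a standard Hoeffding-bound sampler, not an algorithm from the paper.

import math
import random

def capp_estimate(circuit, n, eps, delta=0.001):
    # eps-error CAPP, solved the easy randomized way: sample uniform
    # inputs and average. By Hoeffding, ln(2/delta)/(2 eps^2) samples give
    # additive error <= eps except with probability delta. The hard (and
    # open) question is doing this deterministically in 2^n/n^{omega(1)} time.
    trials = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = sum(circuit([random.randint(0, 1) for _ in range(n)])
               for _ in range(trials))
    return hits / trials

# Demo on a toy "circuit": AND of 3 bits has acceptance probability 1/8.
and3 = lambda x: int(all(x))
print(capp_estimate(and3, 3, eps=0.02))   # ~0.125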

Cite as

Lijie Chen and R. Ryan Williams. Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 19:1-19:43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CCC.2019.19,
  author =	{Chen, Lijie and Williams, R. Ryan},
  title =	{{Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{19:1--19:43},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.19},
  URN =		{urn:nbn:de:0030-drops-108419},
  doi =		{10.4230/LIPIcs.CCC.2019.19},
  annote =	{Keywords: PCP of Proximity, Circuit Lower Bounds, Derandomization, Threshold Circuits, ReLU}
}
Document
Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems

Authors: Lijie Chen, Dylan M. McKay, Cody D. Murray, and R. Ryan Williams

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
A frontier open problem in circuit complexity is to prove P^NP ⊄ SIZE[n^k] for all k; this is a necessary intermediate step towards NP ⊄ P_{/poly}. Previously, for several classes containing P^NP, including NP^NP, ZPP^NP, and S₂P, such lower bounds have been proved via Karp-Lipton-style theorems: to prove C ⊄ SIZE[n^k] for all k, we show that C ⊆ P_{/poly} implies a "collapse" D = C for some larger class D, where we already know D ⊄ SIZE[n^k] for all k. It seems obvious that one could take a different approach to prove circuit lower bounds for P^NP that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^NP are equivalent to fixed-polynomial size circuit lower bounds for P^NP. That is, P^NP ⊄ SIZE[n^k] for all k if and only if (NP ⊆ P_{/poly} implies PH ⊆ i.o.-P^NP_{/n}).

Next, we present new consequences of the assumption NP ⊆ P_{/poly}, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses. That is, if NP ⊄ SIZE[n^k] for all k, then for all C ∈ {Parity-P, PP, PSPACE, EXP}, C ⊆ P_{/poly} implies C ⊆ i.o.-NP_{/n^ε} for all ε > 0. Note that unconditionally, the collapses are only to MA and not NP.

We also explore consequences of circuit lower bounds for a sparse language in NP. Among other results, we show that if a polynomially-sparse NP language does not have n^{1+ε}-size circuits, then MA ⊆ i.o.-NP_{/O(log n)}, MA ⊆ i.o.-P^{NP[O(log n)]}, and NEXP ⊄ SIZE[2^{o(m)}]. Finally, we observe connections between these results and the "hardness magnification" phenomena described in recent works.

Cite as

Lijie Chen, Dylan M. McKay, Cody D. Murray, and R. Ryan Williams. Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 30:1-30:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CCC.2019.30,
  author =	{Chen, Lijie and McKay, Dylan M. and Murray, Cody D. and Williams, R. Ryan},
  title =	{{Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{30:1--30:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.30},
  URN =		{urn:nbn:de:0030-drops-108525},
  doi =		{10.4230/LIPIcs.CCC.2019.30},
  annote =	{Keywords: Karp-Lipton Theorems, Circuit Lower Bounds, Derandomization, Hardness Magnification}
}
Document
Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols

Authors: Lijie Chen and Ruosong Wang

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
In recent years, the polynomial method from circuit complexity has been applied to several fundamental problems, yielding state-of-the-art running times (e.g., R. Williams's n³/2^{Ω(√{log n})}-time algorithm for APSP). As observed in [Alman and Williams, STOC 2017], almost all applications of the polynomial method in algorithm design ultimately rely on certain (probabilistic) low-rank decompositions of the computation matrices corresponding to key subroutines. They suggest that making use of low-rank decompositions directly could lead to more powerful algorithms, as the polynomial method is just one way to derive such a decomposition.

Inspired by their observation, in this paper we study another way of systematically constructing low-rank decompositions of matrices which could be used by algorithms: communication protocols. Since their introduction, it has been known that various types of communication protocols lead to certain low-rank decompositions (e.g., P protocols/rank, BQP protocols/approximate rank). These are usually interpreted as approaches for proving communication lower bounds, while in this work we explore the other direction. We have the following two generic algorithmic applications of communication protocols:

- Quantum communication protocols and deterministic approximate counting. Our first connection is that a fast BQP communication protocol for a function f implies a fast deterministic additive approximate counting algorithm for a related pair counting problem. Applying known BQP communication protocols, we get fast deterministic additive approximate counting algorithms for Count-OV (#OV), Sparse Count-OV, and Formula of SYM circuits. In particular, our approximate counting algorithm for #OV runs in near-linear time for all dimensions d = o(log² n). Previously, no truly-subquadratic time algorithm was known even for d = ω(log n).

- Arthur-Merlin communication protocols and faster Satisfying-Pair algorithms. Our second connection is that a fast AM^{cc} protocol for a function f implies a faster-than-bruteforce algorithm for f-Satisfying-Pair. Using the classical Goldwasser-Sipser AM protocols for approximating set size, we obtain a new algorithm for approximate Max-IP_{n, c log n} in time n^{2-1/O(log c)}, matching the state-of-the-art algorithms in [Chen, CCC 2018].

We also apply our second connection to shed some light on long-standing open problems in communication complexity. We show that if the Longest Common Subsequence (LCS) problem admits a fast (computationally efficient) AM^{cc} protocol (polylog(n) complexity), then polynomial-size Formula-SAT admits a 2^{n-n^{1-δ}}-time algorithm for any constant δ > 0, which is conjectured to be unlikely by a recent work [Abboud and Bringmann, ICALP 2018]. The same holds even for a fast (computationally efficient) PH^{cc} protocol.
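The algorithmic use of low-rank decompositions described above can be seen in miniature: if the communication matrix of f factors exactly through rank r, summing f over all pairs drops from O(|A| ⋅ |B|) to O((|A| + |B|) ⋅ r) time, via Σ_{a,b} ⟨u(a), v(b)⟩ = ⟨Σ_a u(a), Σ_b v(b)⟩. The sketch below is our toy example with an exact decomposition of the equality matrix; the protocols in the paper yield approximate decompositions, which give approximate counts by the same identity.

def count_pairs_via_low_rank(A, B, u, v, r):
    # Assumes f(a, b) = <u(a), v(b)> exactly, with u, v mapping into R^r.
    # Then sum over all |A|*|B| pairs in O((|A| + |B|) * r) time.
    su = [0] * r
    for a in A:
        ua = u(a)
        for i in range(r):
            su[i] += ua[i]
    sv = [0] * r
    for b in B:
        vb = v(b)
        for i in range(r):
            sv[i] += vb[i]
    return sum(su[i] * sv[i] for i in range(r))

# Toy demo: count equal pairs over alphabet {0..r-1}. The equality matrix
# f(a, b) = [a == b] has the rank-r decomposition u(a) = e_a, v(b) = e_b.
r = 4
e = lambda s: [1 if i == s else 0 for i in range(r)]
A = [0, 1, 1, 3]
B = [1, 1, 2, 3]
assert count_pairs_via_low_rank(A, B, e, e, r) == sum(a == b for a in A for b in B)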

Cite as

Lijie Chen and Ruosong Wang. Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2019.23,
  author =	{Chen, Lijie and Wang, Ruosong},
  title =	{{Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.23},
  URN =		{urn:nbn:de:0030-drops-101162},
  doi =		{10.4230/LIPIcs.ITCS.2019.23},
  annote =	{Keywords: Quantum communication protocols, Arthur-Merlin communication protocols, approximate counting, approximate rank}
}
Document
An Improved Algorithm for Incremental DFS Tree in Undirected Graphs

Authors: Lijie Chen, Ran Duan, Ruosong Wang, Hanrui Zhang, and Tianyi Zhang

Published in: LIPIcs, Volume 101, 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)


Abstract
The depth-first search (DFS) tree is one of the most well-known data structures for designing efficient graph algorithms. Given an undirected graph G = (V,E) with n vertices and m edges, the textbook algorithm takes O(n+m) time to construct a DFS tree. In this paper, we study the problem of maintaining a DFS tree when the graph is undergoing incremental updates. Formally, we show: given an arbitrary online sequence of edge or vertex insertions, there is an algorithm that reports a DFS tree in O(n) worst-case time per operation, and requires O(min{m log n, n²}) preprocessing time. Our result improves the previous O(n log³ n) worst-case update time algorithm by Baswana et al. [Baswana et al., 2016] and the O(n log n) time by Nakamura and Sadakane [Nakamura and Sadakane, 2017], and matches the trivial Ω(n) lower bound when it is required to explicitly output a DFS tree. Our result builds on the framework introduced in the breakthrough work by Baswana et al. [Baswana et al., 2016], together with a novel use of a tree-partition lemma by Duan and Zhang [Duan and Zhang, 2016], and the celebrated fractional cascading technique by Chazelle and Guibas [Chazelle and Guibas, 1986a; Chazelle and Guibas, 1986b].
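For reference, the textbook O(n + m) static construction that the incremental algorithm is measured against is just a DFS that records parents; this standard sketch is not the paper's update machinery.

def dfs_tree(adj, root=0):
    # Iterative DFS; parent[v] is the DFS-tree parent of v (None for root
    # and for vertices unreachable from it).
    n = len(adj)
    parent = [None] * n
    visited = [False] * n
    stack = [(root, None)]
    while stack:
        v, p = stack.pop()
        if visited[v]:
            continue
        visited[v] = True
        parent[v] = p
        for w in adj[v]:
            if not visited[w]:
                stack.append((w, v))
    return parent

adj = [[1, 2], [0, 2], [0, 1, 3], [2]]
print(dfs_tree(adj, root=0))   # one valid DFS tree, e.g. [None, 2, 0, 2]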

Cite as

Lijie Chen, Ran Duan, Ruosong Wang, Hanrui Zhang, and Tianyi Zhang. An Improved Algorithm for Incremental DFS Tree in Undirected Graphs. In 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 101, pp. 16:1-16:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chen_et_al:LIPIcs.SWAT.2018.16,
  author =	{Chen, Lijie and Duan, Ran and Wang, Ruosong and Zhang, Hanrui and Zhang, Tianyi},
  title =	{{An Improved Algorithm for Incremental DFS Tree in Undirected Graphs}},
  booktitle =	{16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)},
  pages =	{16:1--16:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-068-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{101},
  editor =	{Eppstein, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2018.16},
  URN =		{urn:nbn:de:0030-drops-88427},
  doi =		{10.4230/LIPIcs.SWAT.2018.16},
  annote =	{Keywords: DFS tree, fractional cascading, fully dynamic algorithm}
}
Document
Nearly Optimal Separation Between Partially and Fully Retroactive Data Structures

Authors: Lijie Chen, Erik D. Demaine, Yuzhou Gu, Virginia Vassilevska Williams, Yinzhan Xu, and Yuancheng Yu

Published in: LIPIcs, Volume 101, 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)


Abstract
Since the introduction of retroactive data structures at SODA 2004, a major unsolved problem has been to bound the gap between the best partially retroactive data structure (where changes can be made to the past, but only the present can be queried) and the best fully retroactive data structure (where the past can also be queried) for any problem. It was proved in 2004 that any partially retroactive data structure with operation time T_op(n,m) can be transformed into a fully retroactive data structure with operation time O(√m ⋅ T_op(n,m)), where n is the size of the data structure and m is the number of operations in the timeline [Demaine et al., 2004]. But it has been open for 14 years whether such a gap is necessary.

In this paper, we prove nearly matching upper and lower bounds on this gap for all n and m. We improve the upper bound for n ≪ √m by showing a new transformation with multiplicative overhead n log m. We then prove a lower bound of min{n log m, √m}^{1-o(1)} assuming any of the following conjectures:

- Conjecture I: Circuit SAT requires 2^{n-o(n)} time on n-input circuits of size 2^{o(n)}. This conjecture is far weaker than the well-believed SETH conjecture from complexity theory, which asserts that CNF SAT with n variables and O(n) clauses already requires 2^{n-o(n)} time.

- Conjecture II: Online (min,+) product between an integer n × n matrix and n vectors requires n^{3-o(1)} time. This conjecture is weaker than the APSP conjectures widely used in fine-grained complexity.

- Conjecture III (3-SUM Conjecture): Given three sets A, B, C of integers, each of size n, deciding whether there exist a ∈ A, b ∈ B, c ∈ C such that a + b + c = 0 requires n^{2-o(1)} time. This 1995 conjecture [Anka Gajentaan and Mark H. Overmars, 1995] was the first conjecture in fine-grained complexity.

Our lower bound construction illustrates an interesting power of fully retroactive queries: they can be used to quickly solve batched pair evaluation. We believe this technique can prove useful for other data structure lower bounds, especially dynamic ones.
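One standard way to see where an O(√m ⋅ T_op) overhead can come from is checkpoint-and-replay, sketched below on a toy structure (our simplification; the actual [Demaine et al., 2004] transformation is more careful about update time): keep a state checkpoint every ~√m operations, and answer a query at a past time t by replaying at most ~√m operations on a copy of the nearest checkpoint.

import math
from copy import deepcopy

class FullyRetroactiveSet:
    # Toy state: a set with insert/erase operations and a historical
    # "contains" query.
    def __init__(self):
        self.ops = []                # timeline of ("insert" | "erase", value)
        self.block = 1
        self.checkpoints = [set()]   # checkpoints[k] = state after k*block ops

    @staticmethod
    def _apply(state, op):
        kind, val = op
        state.add(val) if kind == "insert" else state.discard(val)

    def add_op(self, time, op):
        # Retroactively insert `op` at position `time` in the timeline.
        self.ops.insert(time, op)
        self.block = max(1, math.isqrt(len(self.ops)))
        # Simple O(m) rebuild here; the real transformation pays only
        # ~sqrt(m) * T_op by fixing checkpoints after `time`.
        self.checkpoints, state = [set()], set()
        for i, o in enumerate(self.ops, 1):
            self._apply(state, o)
            if i % self.block == 0:
                self.checkpoints.append(deepcopy(state))

    def query(self, t, val):
        # "Did the set contain `val` after the first t operations?"
        k = t // self.block
        state = deepcopy(self.checkpoints[k])
        for op in self.ops[k * self.block : t]:   # <= block ~ sqrt(m) replays
            self._apply(state, op)
        return val in state

ds = FullyRetroactiveSet()
ds.add_op(0, ("insert", 5))
ds.add_op(0, ("erase", 5))        # timeline is now: erase 5, then insert 5
assert ds.query(1, 5) is False    # after the erase only
assert ds.query(2, 5) is True     # after both operations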

Cite as

Lijie Chen, Erik D. Demaine, Yuzhou Gu, Virginia Vassilevska Williams, Yinzhan Xu, and Yuancheng Yu. Nearly Optimal Separation Between Partially and Fully Retroactive Data Structures. In 16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 101, pp. 33:1-33:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{chen_et_al:LIPIcs.SWAT.2018.33,
  author =	{Chen, Lijie and Demaine, Erik D. and Gu, Yuzhou and Williams, Virginia Vassilevska and Xu, Yinzhan and Yu, Yuancheng},
  title =	{{Nearly Optimal Separation Between Partially and Fully Retroactive Data Structures}},
  booktitle =	{16th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2018)},
  pages =	{33:1--33:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-068-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{101},
  editor =	{Eppstein, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2018.33},
  URN =		{urn:nbn:de:0030-drops-88593},
  doi =		{10.4230/LIPIcs.SWAT.2018.33},
  annote =	{Keywords: retroactive data structure, conditional lower bound}
}
Document
On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product

Authors: Lijie Chen

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
In this paper we study the (Bichromatic) Maximum Inner Product Problem (Max-IP), in which we are given sets A and B of vectors, and the goal is to find a ∈ A and b ∈ B maximizing the inner product a ⋅ b. Max-IP is very basic and serves as the base problem in the recent breakthrough of [Abboud et al., FOCS 2017] on hardness of approximation for polynomial-time problems. It is also used (implicitly) in the argument for hardness of exact ℓ₂-Furthest Pair (and other important problems in computational geometry) in poly-log-log dimensions in [Williams, SODA 2018]. We have three main results regarding this problem.

- Characterization of multiplicative approximation. First, we study the best multiplicative approximation ratio for Boolean Max-IP in sub-quadratic time. We show that, for Max-IP with two sets of n vectors from {0,1}^d, there is an n^{2-Ω(1)}-time (d/log n)^{Ω(1)}-multiplicative-approximating algorithm, and we show this is conditionally optimal, as such a (d/log n)^{o(1)}-approximating algorithm would refute SETH. A similar characterization is also achieved for additive approximation for Max-IP.

- 2^{O(log* n)}-dimensional hardness for exact Max-IP over the integers. Second, we revisit the hardness of solving Max-IP exactly for vectors with integer entries. We show that, under SETH, for Max-IP with sets of n vectors from ℤ^d for some d = 2^{O(log* n)}, every exact algorithm requires n^{2-o(1)} time. With the reduction from [Williams, SODA 2018], it follows that ℓ₂-Furthest Pair and Bichromatic ℓ₂-Closest Pair in 2^{O(log* n)} dimensions require n^{2-o(1)} time.

- Connection with NP ⋅ UPP communication protocols. Last, we establish a connection between conditional lower bounds for exact Max-IP with integer entries and NP ⋅ UPP communication protocols for Set-Disjointness, parallel to the connection between conditional lower bounds for approximating Max-IP and MA communication protocols for Set-Disjointness.

The lower bound in our first result is a direct corollary of the new MA protocol for Set-Disjointness introduced in [Rubinstein, STOC 2018], and our algorithms utilize the polynomial method and simple random sampling. Our second result follows from a new dimensionality self-reduction from the Orthogonal Vectors problem for n vectors from {0,1}^d to n vectors from ℤ^ℓ where ℓ = 2^{O(log* d)}, dramatically improving the previous reduction in [Williams, SODA 2018]. The key technical ingredient is a recursive application of the Chinese Remainder Theorem. As a side product, we obtain an MA communication protocol for Set-Disjointness with complexity O(√{n log n log log n}), slightly improving the O(√n log n) bound [Aaronson and Wigderson, TOCT 2009], and approaching the Ω(√n) lower bound [Klauck, CCC 2003]. Moreover, we show that (under SETH) one can apply the O(√n) BQP communication protocol for Set-Disjointness to prove near-optimal hardness for approximation to Max-IP with vectors in {-1,1}^d. This answers a question from [Abboud et al., FOCS 2017] in the affirmative.
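To fix notation, the sketch below gives the O(n² ⋅ d) brute force for Boolean Max-IP, together with a coordinate-sampling additive estimate of a single inner product in the spirit of the "simple random sampling" mentioned above (toy code, not the paper's algorithms).

import random

def max_ip_bruteforce(A, B):
    # Exact Boolean Max-IP: try all pairs, O(n^2 * d) time.
    return max(sum(ai & bi for ai, bi in zip(a, b)) for a in A for b in B)

def ip_additive_estimate(a, b, k):
    # Estimate <a, b> for a, b in {0,1}^d with additive error roughly
    # d/sqrt(k), by averaging over k uniformly sampled coordinates.
    d = len(a)
    idx = [random.randrange(d) for _ in range(k)]
    return d * sum(a[i] & b[i] for i in idx) / k

A = [(1, 0, 1, 1), (0, 1, 1, 0)]
B = [(1, 1, 1, 0), (1, 0, 0, 1)]
print(max_ip_bruteforce(A, B))              # -> 2
print(ip_additive_estimate(A[0], B[0], 1000))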

Cite as

Lijie Chen. On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 14:1-14:45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{chen:LIPIcs.CCC.2018.14,
  author =	{Chen, Lijie},
  title =	{{On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{14:1--14:45},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.14},
  URN =		{urn:nbn:de:0030-drops-88752},
  doi =		{10.4230/LIPIcs.CCC.2018.14},
  annote =	{Keywords: Maximum Inner Product, SETH, Hardness of Approximation in P, Fine-Grained Complexity, Hopcroft's Problem, Chinese Remainder Theorem}
}
Document
Complexity-Theoretic Foundations of Quantum Supremacy Experiments

Authors: Scott Aaronson and Lijie Chen

Published in: LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)


Abstract
In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible. First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there is a natural average-case hardness assumption, which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work - for example, on BosonSampling and IQP - the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled. Second, in an attempt to refute our hardness assumption, we give a new algorithm, inspired by Savitch's Theorem, for simulating a general quantum circuit with n qubits and m gates in polynomial space and m^{O(n)} time. We then discuss why this and other known algorithms fail to refute our assumption. Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem - of the form "if approximate quantum sampling is classically easy, then the polynomial hierarchy collapses" - must be non-relativizing. This sharply contrasts with the situation for exact sampling. Fourth, refuting a conjecture by Aaronson and Ambainis, we show that the Fourier Sampling problem achieves a constant versus linear separation between quantum and randomized query complexities. Fifth, in search of a "happy medium" between black-box and non-black-box arguments, we study quantum supremacy relative to oracles in P/poly. Previous work implies that, if one-way functions exist, then quantum supremacy is possible relative to such oracles. We show, conversely, that some computational assumption is needed: if SampBPP = SampBQP and NP ⊆ BPP, then quantum supremacy is impossible relative to oracles with small circuits.
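The Savitch-inspired simulation in the second result has a short self-contained core: the amplitude ⟨y|C|x⟩ of a circuit C = C2 C1 equals sum_z ⟨y|C2|z⟩⟨z|C1|x⟩ over all intermediate basis states z, and recursing on each half keeps the space polynomial while the time grows to m^{O(n)}. The Python sketch below illustrates this recursion under assumed conventions (gates given as (matrix, qubits) pairs with little-endian indexing); it is an illustration of the technique, not the paper's code.

import numpy as np
from itertools import product

def amplitude(gates, x, y, n):
    # <y|C|x> for the n-qubit circuit C = gates[m-1] ... gates[0],
    # computed Savitch-style: polynomial space, m^{O(n)} time.
    m = len(gates)
    if m == 0:
        return 1.0 if x == y else 0.0
    if m == 1:
        U, qubits = gates[0]  # U acts on `qubits`, identity elsewhere.
        for i in range(n):    # Bits outside the gate's support must agree.
            if i not in qubits and x[i] != y[i]:
                return 0.0
        row = sum(y[q] << k for k, q in enumerate(qubits))
        col = sum(x[q] << k for k, q in enumerate(qubits))
        return U[row, col]
    # Split the circuit in half and sum over intermediate basis states:
    # <y|C2 C1|x> = sum_z <y|C2|z> <z|C1|x>.
    half, total = m // 2, 0.0
    for z in product((0, 1), repeat=n):
        a = amplitude(gates[:half], x, z, n)
        if a != 0.0:
            total += amplitude(gates[half:], z, y, n) * a
    return total

# Example: <0|HH|0> = 1 on a single qubit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(amplitude([(H, (0,)), (H, (0,))], (0,), (0,), 1))  # ~1.0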

Cite as

Scott Aaronson and Lijie Chen. Complexity-Theoretic Foundations of Quantum Supremacy Experiments. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 22:1-22:67, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{aaronson_et_al:LIPIcs.CCC.2017.22,
  author =	{Aaronson, Scott and Chen, Lijie},
  title =	{{Complexity-Theoretic Foundations of Quantum Supremacy Experiments}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{22:1--22:67},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.22},
  URN =		{urn:nbn:de:0030-drops-75274},
  doi =		{10.4230/LIPIcs.CCC.2017.22},
  annote =	{Keywords: computational complexity, quantum computing, quantum supremacy}
}
Document
Adaptivity vs. Postselection, and Hardness Amplification for Polynomial Approximation

Authors: Lijie Chen

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
We study the following problem: with the power of postselection (classically or quantumly), what is one's ability to answer adaptive queries to certain languages? More specifically, for which computational complexity classes C does P^C belong to PostBPP or PostBQP? While a complete answer to this question seems out of reach given the current state of computational complexity theory, we study the analogous question in query complexity, which sheds light on the limitations of relativized methods (the relativization barrier) for the above question. Informally, we show that, for a partial function f, if there is no efficient bounded-error algorithm for f, classically or quantumly, then there is no efficient postselection bounded-error algorithm to answer adaptive queries to f, classically or quantumly. Our results imply a new proof for the classical oracle separation P^{NP^O} ⊄ PP^O, which is arguably more elegant. They also lead to a new oracle separation P^{SZK^O} ⊄ PP^O, which is close to an oracle separation between SZK and PP, an open problem in the field of oracle separations. Our results also imply a hardness amplification construction for polynomial approximation: given a function f on n bits, we construct an adaptive version of f, denoted by F, on O(m·n) bits, such that if f requires large degree to approximate to error 2/3 in a certain one-sided sense, then F requires large degree to approximate even to error 1/2 - 2^{-m}. Our construction achieves the same amplification as the work of Thaler (ICALP 2016), but by composing with a function of O(log n) deterministic query complexity, in sharp contrast to all previous results, in which the composing amplifiers are hard functions in a certain sense.
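Schematically, and in shorthand assumed here (deg_eps for the eps-error approximate degree of a Boolean function, with the one-sided variant marked), the amplification statement reads roughly as follows; the precise quantifiers and the construction of F are in the paper.

\[
  \deg^{\mathrm{one\text{-}sided}}_{2/3}(f) \ge d
  \quad \Longrightarrow \quad
  \deg_{1/2 - 2^{-m}}(F) \ge d,
  \qquad F : \{0,1\}^{O(m \cdot n)} \to \{0,1\}.
\]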

Cite as

Lijie Chen. Adaptivity vs. Postselection, and Hardness Amplification for Polynomial Approximation. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 26:1-26:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chen:LIPIcs.ISAAC.2016.26,
  author =	{Chen, Lijie},
  title =	{{Adaptivity vs. Postselection, and Hardness Amplification for Polynomial Approximation}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{26:1--26:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.26},
  URN =		{urn:nbn:de:0030-drops-67960},
  doi =		{10.4230/LIPIcs.ISAAC.2016.26},
  annote =	{Keywords: approximate degree, postselection, hardness amplification, adaptivity}
}